Learned Extragradient ISTA with Interpretable Residual Structures for Sparse Coding
Authors
Abstract
Recently, the study of the learned iterative shrinkage thresholding algorithm (LISTA) has attracted increasing attention. A large number of experiments, as well as some theoretical results, have demonstrated the high efficiency of LISTA for solving sparse coding problems. However, existing methods are all serial connections. To address this issue, we propose a novel extragradient-based LISTA (ELISTA), which has a residual structure and theoretical guarantees. Moreover, most existing methods use the soft thresholding function, which has been found to cause estimation bias. Therefore, ELISTA adopts a different thresholding function instead of soft thresholding. From a theoretical perspective, we prove that our method attains linear convergence. Through ablation experiments, the improvements of the network are verified in practice. Extensive empirical results verify the advantages of our method.
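As background for the abstract above, the classical ISTA iteration that LISTA-style networks unroll alternates a gradient step on the data-fidelity term with the soft thresholding function the abstract refers to. A minimal sketch (standard textbook ISTA, not the authors' ELISTA; the function names and step-size choice are illustrative assumptions):

```python
import numpy as np

def soft_threshold(x, theta):
    # Proximal operator of the l1 norm: shrinks each entry toward zero by theta.
    # The abstract notes this operator is known to introduce estimation bias,
    # since every surviving coefficient is reduced in magnitude by theta.
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def ista(D, y, lam, n_iter=500):
    # Classical ISTA for min_x 0.5*||y - D x||^2 + lam*||x||_1.
    # Step size 1/L, where L is the Lipschitz constant of the gradient.
    L = np.linalg.norm(D, 2) ** 2
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        # Gradient step on the least-squares term, then shrinkage.
        x = soft_threshold(x + D.T @ (y - D @ x) / L, lam / L)
    return x
```

LISTA replaces the fixed matrices and thresholds here with learned, layer-wise parameters; ELISTA, per the abstract, further adds an extragradient-style update with a residual structure and swaps out the soft thresholding operator.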
Similar resources
Trainable ISTA for Sparse Signal Recovery
In this paper, we propose a novel sparse signal recovery algorithm called Trainable ISTA (TISTA). The proposed algorithm consists of two estimation units such as a linear estimation unit and a minimum mean squared error (MMSE) estimator-based shrinkage unit. The estimated error variance required in the MMSE shrinkage unit is precisely estimated from a tentative estimate of the original signal. ...
Nonlinear Spike-And-Slab Sparse Coding for Interpretable Image Encoding
Sparse coding is a popular approach to model natural images but has faced two main challenges: modelling low-level image components (such as edge-like structures and their occlusions) and modelling varying pixel intensities. Traditionally, images are modelled as a sparse linear superposition of dictionary elements, where the probabilistic view of this problem is that the coefficients follow a L...
Sparse Coding for Learning Interpretable Spatio-Temporal Primitives
Sparse coding has recently become a popular approach in computer vision to learn dictionaries of natural images. In this paper we extend the sparse coding framework to learn interpretable spatio-temporal primitives. We formulated the problem as a tensor factorization problem with tensor group norm constraints over the primitives, diagonal constraints on the activations that provide interpretabi...
Interpretable sparse SIR for functional data
This work focuses on the issue of variable selection in functional regression. Unlike most work in this framework, our approach does not select isolated points in the definition domain of the predictors, nor does it rely on the expansion of the predictors in a given functional basis. It provides an approach to select full intervals made of consecutive points. This feature improves the interpret...
SPINE: SParse Interpretable Neural Embeddings
Prediction without justification has limited utility. Much of the success of neural models can be attributed to their ability to learn rich, dense and expressive representations. While these representations capture the underlying complexity and latent trends in the data, they are far from being interpretable. We propose a novel variant of denoising k-sparse autoencoders that generates highly ef...
Journal
Journal title: Proceedings of the ... AAAI Conference on Artificial Intelligence
Year: 2021
ISSN: 2159-5399, 2374-3468
DOI: https://doi.org/10.1609/aaai.v35i10.17032